    Uncertainpy: A Python Toolbox for Uncertainty Quantification and Sensitivity Analysis in Computational Neuroscience

    Computational models in neuroscience typically contain many parameters that are poorly constrained by experimental data. Uncertainty quantification and sensitivity analysis provide rigorous procedures to quantify how the model output depends on this parameter uncertainty. Unfortunately, the application of such methods is not yet standard within the field of neuroscience. Here we present Uncertainpy, an open-source Python toolbox, tailored to perform uncertainty quantification and sensitivity analysis of neuroscience models. Uncertainpy aims to make it quick and easy to get started with uncertainty analysis, without any need for detailed prior knowledge. The toolbox allows uncertainty quantification and sensitivity analysis to be performed on already existing models without needing to modify the model equations or model implementation. Uncertainpy bases its analysis on polynomial chaos expansions, which are more efficient than the more standard Monte Carlo-based approaches. Uncertainpy is tailored for neuroscience applications by its built-in capability for calculating characteristic features in the model output. The toolbox does not merely perform a point-to-point comparison of the “raw” model output (e.g., membrane voltage traces), but can also calculate the uncertainty and sensitivity of salient model response features such as spike timing, action potential width, average interspike interval, and other features relevant for various neural and neural network models. Uncertainpy comes with several common models and features built in, and including custom models and new features is easy. The aim of the current paper is to present Uncertainpy to the neuroscience community in a user-oriented manner. To demonstrate its broad applicability, we perform an uncertainty quantification and sensitivity analysis of three case studies relevant for neuroscience: the original Hodgkin-Huxley point-neuron model for action potential generation, a multi-compartmental model of a thalamic interneuron implemented in the NEURON simulator, and a sparsely connected recurrent network model implemented in the NEST simulator.
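
    As a minimal sketch of what such an analysis looks like in practice, the snippet below follows Uncertainpy's documented interface; the toy decay model, parameter ranges, and labels are illustrative assumptions, not an example from the paper.

        import chaospy as cp
        import numpy as np
        import uncertainpy as un

        # Toy model: Uncertainpy expects a run function returning (time, values).
        def decay_model(tau=10, V0=-10):
            time = np.linspace(0, 100, 1000)       # ms (illustrative)
            values = V0 * np.exp(-time / tau)
            return time, values

        model = un.Model(run=decay_model, labels=["Time (ms)", "Voltage (mV)"])

        # Uncertain parameters and their distributions (chaospy).
        parameters = {"tau": cp.Uniform(8, 12), "V0": cp.Uniform(-12, -8)}

        # Polynomial chaos expansion is the default analysis method.
        UQ = un.UncertaintyQuantification(model=model, parameters=parameters)
        data = UQ.quantify()

    For spiking models, the same call can be combined with the built-in feature library (e.g., un.SpikingFeatures), so that uncertainty and sensitivity are computed for features such as spike timing rather than for the raw voltage trace.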

    Decorrelation of neural-network activity by inhibitory feedback

    Correlations in spike-train ensembles can seriously impair the encoding of information by their spatio-temporal structure. An inevitable source of correlation in finite neural networks is common presynaptic input to pairs of neurons. Recent theoretical and experimental studies demonstrate that spike correlations in recurrent neural networks are considerably smaller than expected based on the amount of shared presynaptic input. By means of a linear network model and simulations of networks of leaky integrate-and-fire neurons, we show that shared-input correlations are efficiently suppressed by inhibitory feedback. To elucidate the effect of feedback, we compare the responses of the intact recurrent network and systems where the statistics of the feedback channel are perturbed. The suppression of spike-train correlations and population-rate fluctuations by inhibitory feedback can be observed both in purely inhibitory and in excitatory-inhibitory networks. The effect is fully understood by a linear theory and is already apparent at the macroscopic level of the population-averaged activity. At the microscopic level, shared-input correlations are suppressed by spike-train correlations: in purely inhibitory networks, they are canceled by negative spike-train correlations. In excitatory-inhibitory networks, spike-train correlations are typically positive. Here, the suppression of input correlations is not a result of the mere existence of correlations between excitatory (E) and inhibitory (I) neurons, but a consequence of a particular structure of correlations among the three possible pairings (EE, EI, II).
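
    The macroscopic part of this effect can be illustrated with a minimal, self-contained linear rate model (a didactic sketch, not the paper's spiking-network simulations): a noise-driven population rate is simulated with the inhibitory feedback loop intact and with it removed. The feedback gain and noise parameters are arbitrary illustrative choices.

        import numpy as np

        rng = np.random.default_rng(0)
        dt, tau, steps = 0.1, 10.0, 10000    # time step (ms), time constant (ms), steps
        g = 5.0                              # inhibitory feedback gain (illustrative)
        noise = rng.normal(0.0, 1.0, steps)  # shared external drive fluctuations

        def population_rate(feedback_gain):
            """Linear dynamics: tau * dr/dt = -(1 + feedback_gain) * r + noise."""
            r = np.zeros(steps)
            for i in range(1, steps):
                r[i] = r[i-1] + dt * (-(1 + feedback_gain) * r[i-1] + noise[i-1]) / tau
            return r

        print("variance, open loop:    ", population_rate(0.0).var())
        print("variance, with feedback:", population_rate(g).var())
        # Negative (inhibitory) feedback suppresses the population-rate
        # fluctuations, the macroscopic signature of the decorrelation effect.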

    Coarse-to-Fine Changes of Receptive Fields in Lateral Geniculate Nucleus Have a Transient and a Sustained Component That Depend on Distinct Mechanisms

    Visual processing in the brain seems to provide fast but coarse information before information about fine details. Such dynamics also occur in single neurons at several levels of the visual system. In the dorsal lateral geniculate nucleus (LGN), neurons have a receptive field (RF) with antagonistic center-surround organization, and temporal changes in center-surround organization are generally assumed to be due to a time-lag of the surround activity relative to center activity. Spatial resolution may be measured as the inverse of center size, and in LGN neurons RF-center width changes during static stimulation with durations in the range of normal fixation periods (250–500 ms) between saccadic eye movements. The RF-center is initially large, but rapidly shrinks during the first ∼100 ms to a rather sustained size. We studied such dynamics in anesthetized cats during presentation (250 ms) of static spots centered on the RF, with a main focus on the transition from the first, transient and highly dynamic component to the second, more sustained component. The results suggest that the two components depend on different neuronal mechanisms that operate in parallel and with partial temporal overlap, rather than on a continuously changing center-surround balance. Results from mathematical modeling further supported this conclusion. We found that existing models for the spatiotemporal RF of LGN neurons failed to account for our experimental results. The modeling demonstrated that a new model, in which the response is given by a sum of an early transient component and a partially overlapping sustained component, adequately accounts for our experimental data.
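
    The structure of the proposed model can be sketched numerically: the response is written as the sum of a spatially broad, rapidly decaying transient term and a narrower, slowly rising sustained term, and the effective RF-center width is tracked over time. All amplitudes, widths, and time constants below are placeholder values, not the fitted parameters of the paper.

        import numpy as np

        x = np.linspace(-5, 5, 201)      # visual space (deg), illustrative
        t = np.linspace(0, 250, 251)     # time after stimulus onset (ms)

        def gaussian(x, w):
            return np.exp(-x**2 / (2 * w**2))

        # Transient component: broad in space, decays quickly after onset.
        transient = 1.0 * np.outer(np.exp(-t / 40.0), gaussian(x, 2.0))
        # Sustained component: narrow in space, rises slowly and persists.
        sustained = 0.8 * np.outer(1 - np.exp(-t / 60.0), gaussian(x, 0.7))

        response = transient + sustained  # sum of partially overlapping components

        # Effective center width: std of the normalized spatial profile per time.
        profile = response / response.sum(axis=1, keepdims=True)
        width = np.sqrt((profile * x**2).sum(axis=1))
        print(width[0], width[-1])        # center shrinks from transient to sustained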

    Rate Dynamics of Leaky Integrate-and-Fire Neurons with Strong Synapses

    Firing-rate models provide a practical tool for studying the dynamics of trial- or population-averaged neuronal signals. A wealth of theoretical and experimental studies has been dedicated to the derivation or extraction of such models by investigating the firing-rate response characteristics of ensembles of neurons. The majority of these studies assume that neurons receive input spikes at a high rate through weak synapses (diffusion approximation). For many biological neural systems, however, this assumption cannot be justified. So far, it is unclear how time-varying presynaptic firing rates are transmitted by a population of neurons if the diffusion assumption is dropped. Here, we numerically investigate the stationary and non-stationary firing-rate response properties of leaky integrate-and-fire neurons receiving input spikes through excitatory synapses with alpha-function-shaped postsynaptic currents for strong synaptic weights. Input spike trains are modeled by inhomogeneous Poisson point processes with sinusoidal rate. Average rates, modulation amplitudes, and phases of the period-averaged spike responses are measured for a broad range of stimulus, synapse, and neuron parameters. Across wide parameter regions, the resulting transfer functions can be approximated by a linear first-order low-pass filter. Below a critical synaptic weight, the cutoff frequencies are approximately constant and determined by the synaptic time constants. Only for synapses with unrealistically strong weights are the cutoff frequencies significantly increased. To account for stimuli with larger modulation depths, we combine the measured linear transfer function with the nonlinear response characteristics obtained for stationary inputs. The resulting linear–nonlinear model accurately predicts the population response for a variety of non-sinusoidal stimuli.
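
    The resulting linear–nonlinear description can be sketched as a cascade: a first-order low-pass filter, with cutoff frequency set by the synaptic time constant, applied to the rate modulation, followed by the stationary response nonlinearity. The cutoff value and the saturating nonlinearity below are illustrative stand-ins for the measured quantities.

        import numpy as np

        def lowpass(signal, dt, f_c, gain=1.0):
            """First-order low-pass filter, H(f) = gain / (1 + 1j * f / f_c)."""
            freqs = np.fft.rfftfreq(signal.size, d=dt)
            H = gain / (1.0 + 1j * freqs / f_c)
            return np.fft.irfft(np.fft.rfft(signal) * H, n=signal.size)

        def static_nonlinearity(x):
            # Placeholder for the measured stationary rate-vs-input curve.
            return 50.0 * np.tanh(np.maximum(x, 0.0) / 50.0)

        dt = 1e-3                                              # s
        t = np.arange(0, 2.0, dt)
        stimulus = 40.0 + 30.0 * np.sin(2 * np.pi * 5.0 * t)   # large modulation depth

        f_c = 20.0  # cutoff (Hz), determined by the synaptic time constant
        # LN prediction: filter the modulation, then apply the nonlinearity.
        prediction = static_nonlinearity(
            stimulus.mean() + lowpass(stimulus - stimulus.mean(), dt, f_c))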

    Frequency dependence of signal power and spatial reach of the local field potential

    The first recording of electrical potential from brain activity was reported as early as 1875, but the interpretation of the signal is still debated. To take full advantage of the new generation of microelectrodes with hundreds or even thousands of electrode contacts, an accurate quantitative link between what is measured and the underlying neural circuit activity is needed. Here we address the question of how the observed frequency dependence of recorded local field potentials (LFPs) should be interpreted. By use of a well-established biophysical modeling scheme, combined with detailed reconstructed neuronal morphologies, we find that correlations in the synaptic inputs onto a population of pyramidal cells may significantly boost the low-frequency components of the generated LFP. We further find that these low-frequency components may be less 'local' than the high-frequency LFP components, in the sense that (1) the size of the signal-generation region of the LFP recorded at an electrode is larger, and (2) the LFP generated by a synaptically activated population spreads further outside the population edge due to volume conduction.
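
    The modeling scheme referred to is standard volume-conductor theory, in which the extracellular potential is a weighted sum of all transmembrane currents: for point sources in a homogeneous medium of conductivity sigma, phi(r, t) = sum_n I_n(t) / (4 * pi * sigma * |r - r_n|). Below is a minimal sketch with made-up source positions and currents, not the reconstructed morphologies of the study.

        import numpy as np

        sigma = 0.3  # extracellular conductivity (S/m), a typical literature value

        def lfp_point_sources(electrode, positions, currents):
            """phi = sum_n I_n / (4 * pi * sigma * |r - r_n|) for point sources."""
            d = np.linalg.norm(positions - electrode, axis=1)    # (n_sources,)
            return currents @ (1.0 / (4 * np.pi * sigma * d))    # (n_times,)

        rng = np.random.default_rng(1)
        n_sources, n_times = 100, 1000
        positions = rng.uniform(-1e-4, 1e-4, size=(n_sources, 3))    # m
        currents = rng.normal(0.0, 1e-9, size=(n_times, n_sources))  # A, uncorrelated

        lfp = lfp_point_sources(np.zeros(3), positions, currents)
        # Introducing correlations between the source currents boosts the
        # low-frequency LFP components, the effect quantified in the study.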